23 research outputs found

    Interaction in motion: designing truly mobile interaction

    Get PDF
    The use of technology while being mobile now takes place in many areas of people’s lives and in a wide range of scenarios: users cycle, climb, run, and even swim while interacting with devices. Conflict between locomotion and system use can reduce interaction performance and also the ability to move safely. We discuss the risks of such “interaction in motion”, which we argue make it desirable to design with locomotion in mind. To aid such design we present a taxonomy and framework based on two key dimensions: the relation of the interaction task to the locomotion task, and the degree to which a locomotion activity inhibits use of input and output interfaces. We accompany this with four strategies for interaction in motion. With this work, we ultimately aim to enhance our understanding of what being “mobile” actually means for interaction, and to help practitioners design truly mobile interactions.
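
    To make the two dimensions concrete, here is a minimal sketch of the taxonomy as a data model; the class names, labels, and example values are our hypothetical illustration, not terminology from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class TaskRelation(Enum):
    """How the interaction task relates to the locomotion task (hypothetical labels)."""
    UNRELATED = 1            # e.g., answering email while walking
    SUPPORTING = 2           # e.g., turn-by-turn navigation while cycling
    LOCOMOTION_IS_TASK = 3   # e.g., a running-form trainer

@dataclass
class InteractionInMotion:
    """Positions an interaction scenario on the two taxonomy dimensions."""
    task_relation: TaskRelation
    interface_inhibition: float  # 0.0 (unimpeded) .. 1.0 (input/output fully inhibited)

# Example: map navigation while cycling supports locomotion, but cycling
# strongly inhibits touch input and visual attention.
cycling_navigation = InteractionInMotion(TaskRelation.SUPPORTING, 0.7)
print(cycling_navigation)
```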

    Underwater reconstruction using depth sensors

    Get PDF
    In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct the underwater surfaces. Two conditions are tested: acquiring range images with the sensors submersed, and with the sensors held above the water line, recording through the water. We found that only the Kinect sensor is able to acquire depth images of submersed surfaces when held above the water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submersed. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the strong absorption by water of the near-infrared light in which these sensors operate.
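
    One way to quantify such a comparison is a nearest-neighbor surface error between the through-water and dry reconstructions. This is a minimal sketch under the assumption that both reconstructions are available as point clouds in NumPy arrays; the toy data below merely stands in for real scans.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean nearest-neighbor distance from each reconstructed point to the
    reference (dry-surface) point cloud, in the clouds' units."""
    tree = cKDTree(reference)
    distances, _ = tree.query(reconstructed)
    return float(distances.mean())

# Toy data: a flat patch and a slightly perturbed copy standing in for
# the dry and through-water reconstructions.
rng = np.random.default_rng(0)
dry = rng.uniform(0, 1, size=(1000, 3))
underwater = dry + rng.normal(scale=0.005, size=dry.shape)
print(f"mean error: {mean_surface_error(dry, underwater):.4f}")
```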

    Gesture bike: examining projection surfaces and turn signal systems for urban cycling

    Get PDF
    Interactive surfaces could be employed in urban environments to make people more aware of moving vehicles by showing drivers’ intentions and the subsequent positions of vehicles. To explore the use of projections while cycling, we created a system that displays a map for navigation and signals the cyclist’s intention. The first experiment compared the task of map navigation on a display projected on the road surface in front of the bicycle with a head-up display (HUD) consisting of a projection on a windshield. The HUD system was considered safer and easier to use. In our second experiment, we used projected surfaces to implement concepts inspired by Gibson’s perception theory of driving, combined with detection of conventional cycling gestures to signal and visualize turning intention. A comparison of our system with an off-the-shelf turn signal system showed that gesture input was easier to use. A web-based follow-up study, based on recordings of the two signalling systems from the perspective of other participants in traffic, showed that the gesture-projector system made it easier to understand and predict the cyclist’s intention.
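
    The coupling of recognized gestures to projected turn signals could be as simple as a lookup from gesture to projection command. The gesture and command names below are hypothetical, a sketch of the idea rather than the paper's implementation.

```python
from enum import Enum

class Gesture(Enum):
    LEFT_ARM_OUT = "left_arm_out"
    RIGHT_ARM_OUT = "right_arm_out"
    NONE = "none"

# Hypothetical mapping from a recognized cycling gesture to the
# arrow the projector should render on the road surface.
SIGNAL_FOR_GESTURE = {
    Gesture.LEFT_ARM_OUT: "project_left_arrow",
    Gesture.RIGHT_ARM_OUT: "project_right_arrow",
    Gesture.NONE: "clear_projection",
}

def update_projection(gesture: Gesture) -> str:
    """Return the projection command for the currently detected gesture."""
    return SIGNAL_FOR_GESTURE[gesture]

print(update_projection(Gesture.LEFT_ARM_OUT))  # project_left_arrow
```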

    Body-Centric Projections: Spatial projections relative to the body for map navigation

    No full text
    Technological advancement has led to a steady improvement in brightness and decrease in size of pico projectors. These small devices are available as stand-alone projectors for personal use or are embedded in consumer electronics, ranging from smartphones and smart glasses to video cameras. Portable projected displays provide opportunities for creating feasible, desirable, and viable wearable devices that present information. The main contribution of this thesis is to develop and evaluate a set of working prototypes that present information in new ways around the human body for the task of map navigation. Based on experiments using these prototypes, we gain insights and present a design space for mobile visual interfaces from a body-centric human-computer interaction perspective. First, we design interfaces for an architectural application involving environment projection and explore reconstruction of physical surfaces in different contexts. Environment-centric projection is employed to create interfaces in which the user performs tasks inside a limited physical space augmented with information. Second, we explore the placement of information around the human body while cycling and walking for the task of map navigation in an urban environment. We evaluate these body-centric interfaces through field experiments. Findings from our experiments show, for instance, that while cycling, road projection is considered safer and easier to use than a mobile phone, and a head-up display is considered safer than a projected display on the road. The implications of display placement could inform the design of visual interfaces for bikes, such as bike-sharing systems that already support map navigation using tablets mounted under handlebars. Furthermore, projections on the road could replace headlights to make people more aware of moving vehicles, showing drivers' intentions and the subsequent positions of vehicles. Then, we propose the concept of a "wearable mid-air display", a device that presents dynamic images floating in mid-air relative to a mobile user. Such devices may enable new input and output modalities compared to current mobile devices, and seamlessly offer information on the go. A functional prototype was developed to understand these modalities in more detail, including suitable applications and device placement. Our experiments investigate the use of a wearable mid-air display for map navigation.
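
    The design space of display placements around the body can be pictured as anchoring displays at fixed offsets in a body-centered frame. The placements and offsets below are our illustrative assumptions (a simplified, unrotated frame), not measurements from the thesis.

```python
import numpy as np

# Hypothetical body-centric placements, as offsets in a simplified body
# frame: x = right, y = forward, z = up (meters).
PLACEMENT_OFFSETS = {
    "road_projection": np.array([0.0, 2.0, -1.2]),  # ahead of the bike, on the ground
    "head_up_display": np.array([0.0, 0.6, 0.3]),   # windshield height, near the head
    "mid_air_display": np.array([0.3, 0.5, 0.0]),   # floating beside the torso
}

def display_position(body_position: np.ndarray, placement: str) -> np.ndarray:
    """World-space anchor for a display, as a fixed offset from the body
    (ignoring body rotation for simplicity)."""
    return body_position + PLACEMENT_OFFSETS[placement]

print(display_position(np.array([10.0, 0.0, 1.0]), "head_up_display"))
```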

    Motor learning in a mixed reality environment

    No full text
    The traditional method for acquiring a motor skill is to focus on one's limbs while performing the movement. A theory of motor learning validated over the last ten years contradicts this traditional method: it states that it is more beneficial to focus on external markers outside the human body, and it predicts that the motor skill is acquired better and faster. Using a mixed reality environment, we tested whether the new motor learning approach also holds with a virtual trainer and virtual markers.

    Designing seamless displays for interaction in motion

    No full text
    Most mobile interfaces today are designed with a "stop to interact" paradigm: when a user receives a device notification, they are expected to stop and attend to a screen in their pocket or on their wrist. An alternative is to design wearable displays that minimize the attention required while in motion. We review research on attention and perception theory, recent display technology, and applications of interaction in motion. Based on display layout principles and guidelines from the literature, we created prototypes and ran experiments with mobile projected and mid-air displays for the task of map navigation while walking and cycling.
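
    The shift away from "stop to interact" can be illustrated as a policy that adapts output to the user's locomotion. The thresholds and modality names below are our assumptions for the sketch, not values from the paper.

```python
def choose_presentation(speed_m_s: float) -> str:
    """Pick how to present a notification given the user's current speed.
    Thresholds are illustrative only."""
    if speed_m_s < 0.5:      # effectively stationary: a full screen is fine
        return "handheld_screen"
    elif speed_m_s < 3.0:    # walking: a glanceable wearable display
        return "mid_air_glance"
    else:                    # cycling or faster: audio only, no visual stop
        return "audio_cue"

for v in (0.0, 1.4, 5.0):
    print(v, "->", choose_presentation(v))
```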

    Embodied computation in soft gripper

    No full text
    We designed, built, and tested an underactuated soft gripper able to hold everyday objects of various shapes and sizes without complex hardware or control algorithms, instead combining sheets of flexible plastic materials with a single servo motor. Starting from a prototype in which simple actuation performs complex and varied gripping operations solely through the material system and its inherent physical computation, the paper discusses how embodied computation might exist in a material aggregate by tuning and balancing its morphology and material properties.
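
    Because the adaptation to object shape happens in the material, the control side can be nearly trivial. This is a hypothetical sketch assuming a standard hobby servo driven by pulse width, not the paper's actual control code.

```python
def grip_command(closure: float) -> int:
    """Map a desired closure fraction (0.0 open .. 1.0 closed) to a hobby-servo
    pulse width in microseconds (typical 1000-2000 us range). The adaptation
    to the object's shape happens in the flexible sheets, not in this code."""
    closure = max(0.0, min(1.0, closure))
    return int(1000 + closure * 1000)

print(grip_command(0.0))   # 1000 -> fully open
print(grip_command(0.75))  # 1750 -> mostly closed; the material conforms to the object
```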

    Propositional Architecture using Induced Representation

    No full text
    The paper describes a method and an approach to using sensor data, machine learning, and pattern recognition for proposing and guiding immediate modifications to the existing built environment. The proposed method, Induced Representation, consists of a few steps which we have identified as crucial for such an approach: A, data collection from the environment; B, machine cognition, learning, and prediction; and C, proposition, visualization, and embodied representation for quick implementation. In the paper we outline the factual and theoretical basis for this approach, and we present and discuss three experiments, each dealing with one of the steps A, B, and C.
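
    As a rough illustration of the A-B-C pipeline's shape only, here is a toy sketch in which every component is a stand-in; the paper's actual sensors, models, and visualizations are not specified here.

```python
import numpy as np

def collect(n: int = 100) -> np.ndarray:
    """Step A: stand-in for sensor data collected from the environment."""
    rng = np.random.default_rng(1)
    return rng.uniform(0, 1, size=(n, 3))  # e.g., readings per location

def learn_and_predict(data: np.ndarray) -> np.ndarray:
    """Step B: a trivial pattern-recognition stand-in - flag locations whose
    readings exceed the global mean (a real system would use an ML model)."""
    return data.mean(axis=1) > data.mean()

def propose(flags: np.ndarray) -> list[int]:
    """Step C: turn predictions into proposed modification sites for
    visualization / embodied representation."""
    return list(np.flatnonzero(flags)[:5])

print(propose(learn_and_predict(collect())))
```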